Customizable Precision of Floating-Point Arithmetic with Bitslice Vector Types

Authors

  • Shixiong Xu
  • David Gregg
Abstract

Customizing the precision of data can provide attractive trade-offs between accuracy and hardware resources. We propose a novel form of vector computing aimed at arrays of custom-precision floating point data. We represent these vectors in bitslice format. Bitwise instructions are used to implement arithmetic circuits in software that operate on customized bit-precision. Experiments show that this approach can be efficient for vectors of low-precision custom floating point types, while providing arbitrary bit precision.
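
To make the idea concrete, here is a minimal C sketch of bitslicing for plain 8-bit integer lanes (the names bs_vec, bs_pack, and bs_add are illustrative, not from the paper; the paper targets floating-point data, but the storage layout and the software ripple-carry adder shown here are the same building blocks):

    /* Bitslice storage: slice i holds bit i of all 64 lanes, so one
     * bitwise instruction operates on 64 values at once.  Precision is
     * simply the number of slices kept. */
    #include <stdint.h>
    #include <stdio.h>

    #define BITS  8
    #define LANES 64

    typedef struct { uint64_t slice[BITS]; } bs_vec;

    /* Transpose 64 scalars into bitslice form. */
    static bs_vec bs_pack(const uint8_t v[LANES]) {
        bs_vec r = {{0}};
        for (int lane = 0; lane < LANES; lane++)
            for (int b = 0; b < BITS; b++)
                r.slice[b] |= (uint64_t)((v[lane] >> b) & 1u) << lane;
        return r;
    }

    /* Transpose back to scalars. */
    static void bs_unpack(const bs_vec *x, uint8_t v[LANES]) {
        for (int lane = 0; lane < LANES; lane++) {
            uint8_t s = 0;
            for (int b = 0; b < BITS; b++)
                s |= (uint8_t)(((x->slice[b] >> lane) & 1u) << b);
            v[lane] = s;
        }
    }

    /* Software ripple-carry adder built from bitwise ops; each statement
     * adds one bit position of all 64 lanes in parallel. */
    static bs_vec bs_add(const bs_vec *a, const bs_vec *b) {
        bs_vec r;
        uint64_t carry = 0;
        for (int i = 0; i < BITS; i++) {
            uint64_t ai = a->slice[i], bi = b->slice[i];
            r.slice[i] = ai ^ bi ^ carry;
            carry      = (ai & bi) | (carry & (ai ^ bi));
        }
        return r;
    }

    int main(void) {
        uint8_t x[LANES], y[LANES], z[LANES];
        for (int i = 0; i < LANES; i++) { x[i] = (uint8_t)i; y[i] = (uint8_t)(3 * i); }
        bs_vec bx = bs_pack(x), by = bs_pack(y), bz = bs_add(&bx, &by);
        bs_unpack(&bz, z);
        printf("%d + %d = %d\n", x[5], y[5], z[5]);   /* prints 5 + 15 = 20 */
        return 0;
    }

A custom floating-point adder can follow the same pattern: the sign, exponent, and significand bits of each lane live in their own slices, and alignment, addition, and normalization circuits are expressed as sequences of these bitwise operations.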

Related articles

Effects of Reduced Precision on Floating-Point SVM Classification Accuracy

There is growing interest in performing ever more complex classification tasks on mobile and embedded devices in real time, which results in the need for efficient implementations of the respective algorithms. Support vector machines (SVMs) represent a powerful class of nonlinear classifiers, and reducing the working precision is a promising approach to achieving efficient implementations...

Exploring Approximations for Floating-Point Arithmetic using UppSAT

We consider the problem of solving floating-point constraints obtained from software verification. We present UppSAT, a new implementation of a systematic approximation refinement framework [24] as an abstract SMT solver. Provided with an approximation and a decision procedure (implemented in an off-the-shelf SMT solver), UppSAT yields an approximating SMT solver. Additionally, UppSAT yields...

AVX Acceleration of DD Arithmetic Between a Sparse Matrix and Vector

High-precision arithmetic can improve the convergence of Krylov subspace methods; however, it is very costly. One form of high-precision arithmetic is double-double (DD) arithmetic, which uses more than 20 double-precision operations for one DD operation. We accelerated DD arithmetic using AVX SIMD instructions. The performance of vector operations with 4 threads is 51-59% of peak performance...
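
For context, here is a sketch of what a single DD operation involves (standard error-free-transform formulas; the names two_sum, quick_two_sum, and dd_add are mine, not from the cited paper). Each DD value is an unevaluated sum of two doubles, and even the cheap "sloppy" addition below already needs around a dozen double operations, which is what makes SIMD acceleration attractive:

    #include <stdio.h>

    typedef struct { double hi, lo; } dd;   /* value = hi + lo */

    /* Error-free sum of two doubles (Knuth two-sum): s + e == a + b exactly. */
    static dd two_sum(double a, double b) {
        double s = a + b;
        double v = s - a;
        double e = (a - (s - v)) + (b - v);
        return (dd){s, e};
    }

    /* Fast two-sum, valid when |a| >= |b|. */
    static dd quick_two_sum(double a, double b) {
        double s = a + b;
        double e = b - (s - a);
        return (dd){s, e};
    }

    /* DD + DD ("sloppy" variant used by double-double libraries). */
    static dd dd_add(dd a, dd b) {
        dd s = two_sum(a.hi, b.hi);
        s.lo += a.lo + b.lo;
        return quick_two_sum(s.hi, s.lo);
    }

    int main(void) {
        dd a = {1.0, 1e-20}, b = {2.0, -3e-21};
        dd c = dd_add(a, b);
        printf("hi = %.17g  lo = %.3g\n", c.hi, c.lo);
        return 0;
    }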

FPGA Based Quadruple Precision Floating Point Arithmetic for Scientific Computations

In this project we explore the capability and flexibility of FPGA solutions to accelerate scientific computing applications that require very high-precision arithmetic, based on the IEEE 754 standard 128-bit floating-point number representation. Field Programmable Gate Arrays (FPGAs) are increasingly being used to design high-end, computationally intense microprocessors capable of handling...

Converting floating-point applications to fixed-point

Without knowledge of the issues, the right tools, and a well-thought-out development methodology, floating-point applications can in fact suffer the same problems as their fixed-point equivalents. The reduced precision and range of fixed-point numbers mean that fixed-point applications encounter stability problems more easily than floating-point applications, but both types of applications encounter...
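
As a concrete illustration of the reduced precision and range being discussed, here is a small fixed-point example (illustrative C, not taken from the cited work; the Q1.15 format and helper names are my own choice):

    #include <stdint.h>
    #include <stdio.h>

    typedef int16_t q15_t;                  /* Q1.15: range [-1, 1), step 2^-15 */

    static q15_t  q15_from_double(double x) { return (q15_t)(x * 32768.0); }
    static double q15_to_double(q15_t x)    { return (double)x / 32768.0; }

    /* Fixed-point multiply with rounding.  The 32-bit intermediate avoids
     * overflow of the product, but any result outside [-1, 1) would still
     * wrap unless saturation is added - one of the stability hazards the
     * text refers to. */
    static q15_t q15_mul(q15_t a, q15_t b) {
        int32_t p = (int32_t)a * (int32_t)b;    /* Q2.30 product */
        return (q15_t)((p + (1 << 14)) >> 15);  /* round and rescale to Q1.15 */
    }

    int main(void) {
        q15_t a = q15_from_double(0.5), b = q15_from_double(0.3);
        printf("0.5 * 0.3 ~= %f\n", q15_to_double(q15_mul(a, b)));  /* ~0.15 */
        return 0;
    }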

Journal:
  • CoRR

Volume: abs/1602.04716   Issue: -

Pages: -

Publication year: 2016